Supplementary Material: Knowledge-based in silico models and dataset for the comparative evaluation of mammography AI for a range of breast characteristics, lesion conspicuities and doses

Neural Information Processing Systems

M-SYNTH is organized into a directory structure that indicates the parameters. Code and dataset are released under the Creative Commons 1.0 Universal License. We now review the timing required to perform mass insertion and imaging. Table 2 lists the imaging time for each breast density; it varies from 2.84 to 13.46 minutes per image, and using GPUs we were able to generate the complete dataset in about two weeks.

Table 2: Timing analysis for imaging by breast density.
    Breast Density    Time (min)
    Fatty             13.463809
    Scattered         11.002291
    Hetero             3.655613
    Dense              2.842028

Additional renderings of the breast phantoms generated for the study are shown in Figure 1, demonstrating a high level of detail and anatomical variability within and among models.



AIonopedia: an LLM agent orchestrating multimodal learning for ionic liquid discovery

Yin, Yuqi, Fu, Yibo, Wang, Siyuan, Sun, Peng, Wang, Hongyu, Wang, Xiaohui, Zheng, Lei, Li, Zhiyong, Liu, Zhirong, Wang, Jianji, Sun, Zhaoxi

arXiv.org Artificial Intelligence

The discovery of novel Ionic Liquids (ILs) is hindered by critical challenges in property prediction, including limited data, poor model accuracy, and fragmented workflows. Leveraging the power of Large Language Models (LLMs), we introduce AIonopedia, to the best of our knowledge, the first LLM agent for IL discovery. Powered by an LLM-augmented multimodal domain foundation model for ILs, AIonopedia enables accurate property predictions and incorporates a hierarchical search architecture for molecular screening and design. Trained and evaluated on a newly curated and comprehensive IL dataset, our model delivers superior performance. Complementing these results, evaluations on literature-reported systems indicate that the agent can perform effective IL modification. Moving beyond offline tests, the practical efficacy was further confirmed through real-world wet-lab validation, in which the agent demonstrated exceptional generalization capabilities on challenging out-of-distribution tasks, underscoring its ability to accelerate real-world IL discovery.


Knowledge-based in silico models and dataset for the comparative evaluation of mammography AI for a range of breast characteristics, lesion conspicuities and doses

Neural Information Processing Systems

Precise mass location and extent (e.g., mass boundaries) are typically not available in the patient's records, and it is burdensome, error-prone, and sometimes impossible to




Collision-Aware Traversability Analysis for Autonomous Vehicles in the Context of Agricultural Robotics

Philippe, Florian, Laconte, Johann, Lapray, Pierre-Jean, Spisser, Matthias, Lauffenburger, Jean-Philippe

arXiv.org Artificial Intelligence

In this paper, we introduce a novel method for safe navigation in agricultural robotics. As global environmental challenges intensify, robotics offers a powerful solution to reduce chemical usage while meeting the increasing demands for food production. However, significant challenges remain in ensuring the autonomy and resilience of robots operating in unstructured agricultural environments. Deformable obstacles such as crops and tall grass must be identified as safely traversable, in contrast to rigid obstacles. To address this, we propose a new traversability analysis method based on a 3D spectral map reconstructed using a LIDAR and a multispectral camera. This approach enables the robot to distinguish between safe and unsafe collisions with deformable obstacles. We perform a comprehensive evaluation of multispectral metrics for vegetation detection and incorporate these metrics into an augmented environmental map. Utilizing this map, we compute a physics-based traversability metric that accounts for the robot's weight and size, ensuring safe navigation over deformable obstacles.
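A minimal sketch of the idea behind such a physics-based metric: treat a deformable obstacle as a linear spring and call a map cell traversable when the force needed to push the vegetation down stays within a budget derived from the robot's weight. The spring model, the safety factor, and all constants below are illustrative assumptions, not the paper's actual formulation.

```python
G = 9.81  # gravitational acceleration, m/s^2

def max_safe_force(robot_mass_kg: float, safety_factor: float = 0.5) -> float:
    """Force budget as a fraction of the robot's weight (assumption)."""
    return safety_factor * robot_mass_kg * G

def deflection_force(stiffness_n_per_m: float, crush_height_m: float) -> float:
    """Linear-spring model of bending a deformable obstacle out of the way."""
    return stiffness_n_per_m * crush_height_m

def traversable(stiffness_n_per_m: float, obstacle_height_m: float,
                robot_clearance_m: float, robot_mass_kg: float) -> bool:
    """Traversable if crushing the part of the obstacle above the robot's
    clearance requires less force than the robot can safely absorb."""
    crush = max(0.0, obstacle_height_m - robot_clearance_m)
    return deflection_force(stiffness_n_per_m, crush) <= max_safe_force(robot_mass_kg)
```

Under this toy model, tall grass (low stiffness) passes the check while a rigid obstacle of the same height (very high stiffness) fails it.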


Physical Property Understanding from Language-Embedded Feature Fields

Zhai, Albert J., Shen, Yuan, Chen, Emily Y., Wang, Gloria X., Wang, Xinlei, Wang, Sheng, Guan, Kaiyu, Wang, Shenlong

arXiv.org Artificial Intelligence

Can computers perceive the physical properties of objects solely through vision? Research in cognitive science and vision science has shown that humans excel at identifying materials and estimating their physical properties based purely on visual appearance. In this paper, we present a novel approach for dense prediction of the physical properties of objects using a collection of images. Inspired by how humans reason about physics through vision, we leverage large language models to propose candidate materials for each object. We then construct a language-embedded point cloud and estimate the physical properties of each 3D point using a zero-shot kernel regression approach. Our method is accurate, annotation-free, and applicable to any object in the open world. Experiments demonstrate the effectiveness of the proposed approach in various physical property reasoning tasks, such as estimating the mass of common objects, as well as other properties like friction and hardness.
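The zero-shot kernel-regression step can be sketched as follows: each 3D point carries a language-embedded feature, each LLM-proposed candidate material carries a text embedding plus a known property value, and the point's property estimate is a similarity-weighted average of the candidates' values. The softmax kernel, the temperature, and the toy embeddings are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def kernel_regress(point_feat, material_feats, material_props, temp=0.1):
    """Estimate a point's property as a softmax-weighted mix of the
    candidate materials' known property values."""
    p = point_feat / np.linalg.norm(point_feat)
    m = material_feats / np.linalg.norm(material_feats, axis=1, keepdims=True)
    sims = m @ p                      # cosine similarity to each material
    w = np.exp(sims / temp)
    w /= w.sum()                      # kernel weights sum to 1
    return float(w @ np.asarray(material_props, dtype=float))
```

Because the weights are a convex combination, the estimate always lies between the smallest and largest candidate property values, and a point whose feature closely matches one material's embedding inherits roughly that material's value.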


Calibrated and Enhanced NRLMSIS 2.0 Model with Uncertainty Quantification

Licata, Richard J., Mehta, Piyush M., Weimer, Daniel R., Tobiska, W. Kent, Yoshii, Jean

arXiv.org Artificial Intelligence

The Mass Spectrometer and Incoherent Scatter radar (MSIS) model family has been developed and improved since the early 1970s. The most recent version of MSIS is the Naval Research Laboratory (NRL) MSIS 2.0 empirical atmospheric model. NRLMSIS 2.0 provides species density, mass density, and temperature estimates as a function of location and space weather conditions. MSIS models have long been a popular choice of atmosphere model in the research and operations communities alike, but, like many models, do not provide uncertainty estimates. In this work, we develop an exospheric temperature model based on machine learning (ML) that can be used with NRLMSIS 2.0 to calibrate it relative to high-fidelity satellite density estimates. Instead of providing point estimates, our model (called MSIS-UQ) outputs a distribution, which is assessed using a metric called the calibration error score. We show that MSIS-UQ debiases NRLMSIS 2.0, reducing differences between model and satellite density by 25%, and is 11% closer to satellite density than the Space Force's High Accuracy Satellite Drag Model. We also show the model's uncertainty estimation capabilities by generating altitude profiles for species density, mass density, and temperature. This explicitly demonstrates how exospheric temperature probabilities affect density and temperature profiles within NRLMSIS 2.0. Another study displays improved post-storm overcooling capabilities relative to NRLMSIS 2.0 alone, enhancing the phenomena that it can capture.
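The general shape of a calibration error score for a model emitting Gaussian predictive distributions can be sketched as below: compare the nominal coverage of central intervals (from the Gaussian CDF) against the coverage actually observed on held-out data. The z-grid and the averaging are assumptions for illustration; the exact score used by MSIS-UQ may differ.

```python
import math
import numpy as np

def calibration_error(y_true, mu, sigma, z_grid=(0.5, 1.0, 1.5, 2.0)):
    """Mean absolute gap between nominal and empirical interval coverage
    for Gaussian predictions with mean mu and standard deviation sigma."""
    z = np.abs((np.asarray(y_true) - mu) / sigma)
    gaps = []
    for zc in z_grid:
        nominal = math.erf(zc / math.sqrt(2.0))  # coverage of |Z| <= zc
        observed = float(np.mean(z <= zc))       # fraction actually inside
        gaps.append(abs(observed - nominal))
    return float(np.mean(gaps))
```

A well-calibrated model scores near zero; an overconfident one (sigma too small, so too few observations fall inside each nominal interval) scores higher.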